Dynamic magnetic resonance imaging (MRI) is a popular medical imaging technique that generates image sequences of the flow of a contrast material inside tissues and organs. However, its application to imaging bolus transport through the esophagus has only been demonstrated in a few feasibility studies and remains relatively unexplored. In this work, we present a computational framework called mechanics-informed MRI (MRI-MECH) that enhances this capability, thereby increasing the applicability of dynamic MRI to diagnosing esophageal disorders. Pineapple juice was used as the swallowed contrast material for dynamic MRI, and the MRI image sequence served as input to MRI-MECH. MRI-MECH models the esophagus as a flexible one-dimensional tube whose elastic walls follow a linear tube law. Flow through the esophagus is then governed by the one-dimensional mass and momentum conservation equations. These equations are solved using a physics-informed neural network (PINN). The PINN minimizes the discrepancy between MRI measurements and model predictions while ensuring that the physics of the fluid flow problem is always satisfied. MRI-MECH computes the fluid velocity and pressure during esophageal transit and estimates the mechanical health of the esophagus by calculating wall stiffness and active relaxation. Furthermore, MRI-MECH predicts missing information about the lower esophageal sphincter during emptying, demonstrating its applicability to scenarios with missing data or poor image resolution. Besides improving clinical decision making through quantitative estimates of esophageal mechanical health, MRI-MECH can also be extended to other medical imaging modalities to enhance their functionality.
Translated by Google Translate
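As a rough illustration of the governing equations in the abstract above, the following is a minimal finite-difference sketch of the mass and momentum conservation residuals that a PINN would drive toward zero, coupled with a linear tube law. The functional forms, constants (`K`, `A0`, `rho`), and discretization are assumed for illustration and are not taken from MRI-MECH.

```python
import numpy as np

def tube_law(A, K=1.0, A0=1.0):
    """Assumed linear tube law: pressure grows with cross-sectional distension."""
    return K * (A / A0 - 1.0)

def conservation_residuals(A, u, dx, dt, rho=1.0):
    """Finite-difference residuals of 1-D mass and momentum conservation.

    A, u: arrays of shape (nt, nx) -- cross-sectional area and velocity.
    Returns mean-squared residuals; a PINN would minimize these together
    with a data-mismatch term against the MRI measurements.
    """
    p = tube_law(A)
    # Mass conservation: A_t + (A u)_x = 0
    A_t = (A[1:, 1:-1] - A[:-1, 1:-1]) / dt
    Au_x = (A[:-1, 2:] * u[:-1, 2:] - A[:-1, :-2] * u[:-1, :-2]) / (2 * dx)
    mass = A_t + Au_x
    # Momentum conservation: u_t + u u_x + p_x / rho = 0
    u_t = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt
    u_x = (u[:-1, 2:] - u[:-1, :-2]) / (2 * dx)
    p_x = (p[:-1, 2:] - p[:-1, :-2]) / (2 * dx)
    momentum = u_t + u[:-1, 1:-1] * u_x + p_x / rho
    return float(np.mean(mass**2)), float(np.mean(momentum**2))

# A steady uniform state satisfies both conservation laws exactly.
A = np.ones((4, 8))
u = 0.5 * np.ones((4, 8))
r_mass, r_mom = conservation_residuals(A, u, dx=0.1, dt=0.01)
print(r_mass, r_mom)  # both 0.0 for the uniform state
```

In the actual framework these residuals are evaluated through automatic differentiation of the network outputs rather than finite differences; the sketch only shows what quantity the physics loss penalizes.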
The pathogenesis of esophageal disorders is related to esophageal wall mechanics. Therefore, to understand the fundamental mechanisms underlying various esophageal disorders, it is crucial to map parameters based on esophageal wall mechanics onto the physiological and pathophysiological conditions corresponding to altered bolus transit and abnormal intrabolus pressure (IBP). In this work, we present a hybrid framework that combines fluid mechanics and machine learning to identify the underlying physics of various esophageal disorders and map them onto a parameter space that we call the virtual disease landscape (VDL). A one-dimensional inverse model processes the output of an esophageal diagnostic device called the endoscopic functional lumen imaging probe (EndoFLIP) to estimate the mechanical "health" of the esophagus by predicting a set of mechanics-based parameters, such as esophageal wall stiffness, the muscle contraction pattern of the esophageal wall, and active relaxation. The mechanics-based parameters are then used to train a neural network consisting of a variational autoencoder (VAE), which generates a latent space, and a side network that predicts a mechanical work metric for estimating esophageal contractility. The latent vectors, together with the set of mechanics-based parameters, define the VDL and form clusters corresponding to various esophageal disorders. The VDL not only distinguishes different disorders but can also be used to predict disease progression over time. Finally, we demonstrate the clinical applicability of the framework for estimating the effectiveness of a treatment and tracking a patient's condition after treatment.
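To make the VAE step above concrete, here is a minimal sketch (not the authors' network) of how a mechanics-based parameter vector could be mapped to a latent point on a disease landscape via the standard VAE reparameterization trick. The weights are random placeholders and the parameter names and dimensions are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(params, W_mu, W_logvar):
    """Map a mechanics-based parameter vector to a latent Gaussian (mu, logvar)."""
    mu = params @ W_mu
    logvar = params @ W_logvar
    return mu, logvar

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps, so gradients can flow through mu and sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Three hypothetical parameters (wall stiffness, contraction strength,
# active relaxation) mapped to a 2-D latent point on the landscape.
params = np.array([0.8, 1.2, 0.3])
W_mu = rng.standard_normal((3, 2)) * 0.1
W_logvar = rng.standard_normal((3, 2)) * 0.1
mu, logvar = encode(params, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
print(z.shape)  # (2,)
```

In the framework described above, the clusters formed by such latent vectors (together with the mechanics-based parameters themselves) are what define the VDL.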
This paper presents a novel approach to the acquisition of language models from corpora. The framework builds on Cobweb, an early system for constructing taxonomic hierarchies of probabilistic concepts that used a tabular, attribute-value encoding of training cases and concepts, making it unsuitable for sequential input like language. In response, we explore three new extensions to Cobweb -- the Word, Leaf, and Path variants. These systems encode each training case as an anchor word and surrounding context words, and they store probabilistic descriptions of concepts as distributions over anchor and context information. As in the original Cobweb, a performance element sorts a new instance downward through the hierarchy and uses the final node to predict missing features. Learning is interleaved with performance, updating concept probabilities and hierarchy structure as classification occurs. Thus, the new approaches process training cases in an incremental, online manner that is very different from most methods for statistical language learning. We examine how well the three variants place synonyms together and keep homonyms apart, their ability to recall synonyms as a function of training set size, and their training efficiency. Finally, we discuss related work on incremental learning and directions for further research.
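A toy sketch of the Cobweb-style performance element described above (not the authors' implementation): each concept stores distributions over anchor and context words, and a new instance with a missing anchor is sorted to the child whose context distribution best matches it. The simple overlap score below is a stand-in for Cobweb's category-utility measure.

```python
from collections import Counter

class Concept:
    """A probabilistic concept: counts over anchor and context words."""

    def __init__(self, anchors, contexts):
        self.anchor_counts = Counter(anchors)
        self.context_counts = Counter(contexts)

    def score(self, context):
        # Stand-in match score: total count mass shared with the context.
        return sum(self.context_counts[w] for w in context)

    def predict_anchor(self):
        # Predict the missing anchor from the node's own distribution.
        return self.anchor_counts.most_common(1)[0][0]

# Two concepts for the homonym "bank": context keeps the senses apart.
children = [
    Concept(anchors=["bank", "bank"], contexts=["river", "water", "shore"]),
    Concept(anchors=["bank", "bank"], contexts=["money", "loan", "cash"]),
]

instance_context = ["money", "cash"]  # the anchor word is missing
best = max(children, key=lambda c: c.score(instance_context))
print(best.predict_anchor())  # "bank", via the money-sense concept
```

In the full system this sorting step recurses down the hierarchy and updates the visited nodes' counts as it goes, which is what makes learning incremental and interleaved with performance.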
The evaluation of abstractive summarization models typically uses test data drawn from the same distribution as the training data. In real-world practice, documents to be summarized may contain input noise caused by text extraction artifacts or data pipeline bugs. The robustness of model performance under the distribution shift caused by such noise is relatively under-studied. We present a large empirical study quantifying the sometimes severe loss in performance (up to 12 ROUGE-1 points) from different types of input noise for a range of datasets and model sizes. We then propose a lightweight method for detecting and removing such noise in the input during model inference without requiring any extra training, auxiliary models, or even prior knowledge of the type of noise. Our proposed approach effectively mitigates the loss in performance, recovering a large fraction of the performance drop, sometimes as large as 11 ROUGE-1 points.
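The abstract does not specify the detection mechanism, so as a loudly labeled stand-in (not the paper's method), here is one naive filter for the kind of noise described: drop input lines dominated by non-linguistic characters, a common signature of text-extraction artifacts. The threshold is an arbitrary assumption.

```python
import re

def noise_score(line):
    """Fraction of characters that are not letters, digits, or whitespace."""
    if not line:
        return 1.0
    junk = len(re.findall(r"[^A-Za-z0-9\s]", line))
    return junk / len(line)

def clean_document(lines, threshold=0.3):
    """Keep only lines whose noise score falls below the threshold."""
    return [l for l in lines if noise_score(l) < threshold]

doc = [
    "The quick brown fox jumps over the lazy dog.",
    "<<<|||###@@@&&&***>>>",
    "Summarization models are sensitive to input noise.",
]
print(clean_document(doc))  # the artifact line is removed
```

The paper's actual approach works at inference time with no auxiliary models; this heuristic merely illustrates the input-cleaning step conceptually.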
Text-guided image editing can have a transformative impact in supporting creative applications. A key challenge is to generate edits that are faithful to input text prompts, while consistent with input images. We present Imagen Editor, a cascaded diffusion model built by fine-tuning Imagen for text-guided image inpainting. Imagen Editor's edits are faithful to the text prompts, which is accomplished by using object detectors to propose inpainting masks during training. In addition, Imagen Editor captures fine details in the input image by conditioning the cascaded pipeline on the original high-resolution image. To improve qualitative and quantitative evaluation, we introduce EditBench, a systematic benchmark for text-guided image inpainting. EditBench evaluates inpainting edits on natural and generated images, exploring objects, attributes, and scenes. Through extensive human evaluation on EditBench, we find that object-masking during training leads to across-the-board improvements in text-image alignment -- such that Imagen Editor is preferred over DALL-E 2 and Stable Diffusion -- and, as a cohort, these models are better at object-rendering than text-rendering, and handle material/color/size attributes better than count/shape attributes.
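A minimal sketch of the object-masking idea described above: a detector's bounding box (here a hypothetical hard-coded box standing in for detector output) defines the inpainting mask during training, so the model learns to fill regions that align with text-described objects rather than random patches.

```python
import numpy as np

def box_to_mask(h, w, box):
    """Convert a bounding box (y0, x0, y1, x1) into a binary inpainting
    mask of shape (h, w): 1 inside the object region, 0 elsewhere."""
    mask = np.zeros((h, w), dtype=np.uint8)
    y0, x0, y1, x1 = box
    mask[y0:y1, x0:x1] = 1
    return mask

# Hypothetical detector output on a 64x64 training image.
mask = box_to_mask(64, 64, (10, 12, 30, 40))
print(mask.sum())  # 20 * 28 = 560 masked pixels
```

During training, the masked region is removed from the image and the model is asked to reconstruct it conditioned on the text prompt and the unmasked context.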
This paper is a technical overview of DeepMind and Google's recent work on reinforcement learning for controlling commercial cooling systems. Building on expertise that began with cooling Google's data centers more efficiently, we recently conducted live experiments on two real-world facilities in partnership with Trane Technologies, a building management system provider. These live experiments had a variety of challenges in areas such as evaluation, learning from offline data, and constraint satisfaction. Our paper describes these challenges in the hope that awareness of them will benefit future applied RL work. We also describe the way we adapted our RL system to deal with these challenges, resulting in energy savings of approximately 9% and 13% respectively at the two live experiment sites.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
The problem of estimating a linear functional based on observational data is canonical in both the causal inference and bandit literatures. We analyze a broad class of two-stage procedures that first estimate the treatment-effect function and then use this quantity to estimate the linear functional. We prove non-asymptotic upper bounds on the mean-squared error of such procedures: these bounds reveal that, in order to obtain non-asymptotically optimal procedures, the error in estimating the treatment effect should be minimized in a particular weighted $L^2$-norm. We analyze a two-stage procedure based on constrained regression in this weighted norm and establish its instance-dependent optimality in finite samples via matching non-asymptotic local minimax lower bounds. These results show that the optimal non-asymptotic risk, in addition to depending on the asymptotically efficient variance, depends on the weighted-norm distance between the true outcome function and its approximation by the richest function class supported by the sample size.
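To fix ideas, here is a toy plug-in version of the two-stage scheme described above, with all notation assumed for illustration: stage one fits the treatment-effect function $\tau(x)$ from outcome differences, and stage two plugs it into a simple linear functional, the population mean $\psi = E[\tau(X)]$. This sketch uses ordinary least squares, not the weighted-norm constrained regression the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(1)

def stage_one_fit(x, y_treated, y_control):
    """Stage one (toy): fit a linear treatment-effect function to the
    observed outcome differences."""
    coeffs = np.polyfit(x, y_treated - y_control, deg=1)
    return lambda q: np.polyval(coeffs, q)

def stage_two_functional(tau_hat, x):
    """Stage two: plug the estimated effect function into the linear
    functional psi = E[tau(X)], here approximated by a sample mean."""
    return float(np.mean(tau_hat(x)))

# Synthetic data with a known effect function tau(x) = 2 + 0.5 x.
x = rng.uniform(0, 1, 200)
true_tau = 2.0 + 0.5 * x
y_t = true_tau + rng.normal(0, 0.1, 200)
y_c = rng.normal(0, 0.1, 200)

tau_hat = stage_one_fit(x, y_t, y_c)
psi_hat = stage_two_functional(tau_hat, x)
print(psi_hat)  # close to 2.0 + 0.5 * E[X] = 2.25
```

The paper's point is precisely that the stage-one fit should not minimize a plain squared error as above, but the error in a particular weighted $L^2$-norm, in order for the resulting two-stage estimate to be non-asymptotically optimal.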
We present a temporally extended variation of the successor representation, which we term t-SR. t-SR captures the expected state-transition dynamics of temporally extended actions by constructing a successor representation over primitive action-repeat sequences. This form of temporal abstraction does not learn a top-down hierarchy of task-relevant structure, but rather a bottom-up composition of coupled actions and action repetitions. This reduces the number of decisions required in control without learning a hierarchical policy. As such, t-SR directly accounts for the temporal horizon of temporally extended action sequences without requiring predefined or domain-specific options. We show that in environments with dynamic reward structure, t-SR is able to exploit both the flexibility of the successor representation and the abstraction afforded by temporally extended actions. Thus, across a suite of sparsely rewarded grid-world environments, t-SR adapts its policies optimally far faster than comparable model-free reinforcement learning methods. We also show that the manner in which t-SR learns to solve these tasks requires the learned policy to be sampled consistently less often than non-temporally-extended policies.
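A minimal sketch of the tabular successor-representation TD update that t-SR builds on, before the temporal extension over action repeats: each row M[s] tracks the expected discounted future occupancy of every state from s. The chain environment, learning rate, and sweep counts are illustrative choices, not taken from the paper.

```python
import numpy as np

def sr_td_update(M, s, s_next, alpha=0.1, gamma=0.9):
    """One TD update of the successor representation for transition s -> s_next:
    M[s] <- M[s] + alpha * (one_hot(s) + gamma * M[s_next] - M[s])."""
    onehot = np.zeros(M.shape[1])
    onehot[s] = 1.0
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    return M

n_states = 4
M = np.zeros((n_states, n_states))

# Repeatedly walk the chain 0 -> 1 -> 2 -> 3 until the SR converges.
for _ in range(500):
    for s in range(n_states - 1):
        M = sr_td_update(M, s, s + 1)

# From state 0, nearer states have higher expected discounted occupancy:
# approximately [1, 0.9, 0.81, 0].
print(M[0].round(2))
```

t-SR applies the same kind of update over action-repeat sequences rather than single primitive transitions, which is what lets it re-plan quickly when the reward structure changes while making fewer decisions per trajectory.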
Spatial-temporal map (STMap)-based methods have shown great potential for processing high-angle videos for vehicle trajectory reconstruction, which can meet the needs of various data-driven modeling and imitation learning applications. In this paper, we develop a spatial-temporal deep embedding (STDE) model that imposes parity constraints at both the pixel and instance levels to generate instance-aware embeddings for vehicle-stripe segmentation on STMaps. At the pixel level, each pixel is encoded with its 8-neighbor pixels at different ranges, and this encoding is subsequently used to guide a neural network in learning the embedding mechanism. At the instance level, a discriminative loss function is designed to pull pixels belonging to the same instance closer together and to push the means of different instances apart. The output of the spatial-temporal affinities is then optimized to obtain the final clustering results. Judged on segmentation metrics, our model outperforms five other baselines for STMap processing and shows robustness under the influence of shadows, static noise, and overlapping. The designed model is applied to process all public NGSIM US-101 videos to generate complete vehicle trajectories, indicating good scalability and adaptability. Last but not least, the strengths of the scanline method combined with STDE and future directions are discussed. The code, the STMap dataset, and the video trajectories are made publicly available in an online repository. GitHub link: shorturl.at/jklt0.
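A sketch of the pull/push discriminative loss described above: embeddings of one instance are pulled toward their mean, and the means of different instances are pushed apart. The margin values and the exact functional form are assumed for illustration, not taken from the STDE paper.

```python
import numpy as np

def discriminative_loss(embeddings, labels, d_pull=0.5, d_push=1.5):
    """Pull term: penalize embeddings farther than d_pull from their
    instance mean. Push term: penalize instance means closer than d_push."""
    means = {}
    for lab in np.unique(labels):
        means[lab] = embeddings[labels == lab].mean(axis=0)

    pull = 0.0
    for lab, mu in means.items():
        d = np.linalg.norm(embeddings[labels == lab] - mu, axis=1)
        pull += np.mean(np.maximum(0.0, d - d_pull) ** 2)

    push, pairs = 0.0, 0
    labs = list(means)
    for i in range(len(labs)):
        for j in range(i + 1, len(labs)):
            d = np.linalg.norm(means[labs[i]] - means[labs[j]])
            push += max(0.0, d_push - d) ** 2
            pairs += 1
    if pairs:
        push /= pairs
    return pull / len(means) + push

# Two tight, well-separated instances incur zero loss.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
lab = np.array([0, 0, 1, 1])
print(discriminative_loss(emb, lab))  # 0.0
```

Minimizing such a loss yields embeddings in which pixels of the same vehicle stripe cluster tightly while different stripes stay separated, which is what makes the subsequent clustering step on the STMap reliable.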